Patent abstract:
VISUALIZING A CUSTOMIZED PRODUCT IN SITU. Techniques for viewing a product in the actual location in the environment where the product is to be used or displayed are described. An embodiment of the approaches described herein can be used in the context of a computerized system that can receive and store digital images, receive a request to manufacture a customized product, including an identification of an image to be framed and a type of frame and/or cardboard mat, and that displays a preview image of the customized product that simulates the actual appearance of the product as realistically as possible. With such a system, the preview image can be highly realistic under ideal lighting and display conditions.
Publication number: BR102012021901B1
Application number: R102012021901-8
Filing date: 2012-08-30
Publication date: 2020-10-27
Inventors: Leslie Young Harvill; Richard Harold Bean; Robert Irven Beaver III
Applicant: Zazzle Inc.
IPC primary class:
Patent description:

BACKGROUND
Certain approaches described in certain sections of this disclosure and identified as "background" or "prior approaches" are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this manner qualify as prior art merely by virtue of their identification as "background" or "prior approaches."
Various automated computer systems are currently available with which users or end consumers of products can design, preview, and order custom manufactured products that incorporate images or graphics. Examples of such products include wearable apparel, beverage vessels, and accessory items. In a typical system, a user or end consumer uses a general-purpose computer terminal, such as a personal computer with a browser, to connect over a public network to a server computer. The user selects a stored graphic image, or uploads a digital image that the user obtained or created. The user selects a type of product to which the graphic image should be applied and specifies various parametric values for the product, such as color, size, location of the image placement, or others. The server computer or terminal generates a rendered image showing how the product should appear after customized manufacture with the specified image applied. The user approves the rendered image and places an order for the product. A manufacturer receives the order data, manufactures the product as specified, and supplies the customized manufactured product to the user.
One type of product of interest, not offered in typical prior systems, is framed or mounted material. A frame may comprise wooden, metal, or plastic frame parts. The mounting may include one or more cardboard mats or may comprise a float mount. Materials can include digital images, film photographs, original digital art, prints, paintings, animation cels, or any other graphic work or work of visual art. Individualized online design and custom fabrication of such framed and mounted material is either impossible or imperfect using existing systems.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart of a process for viewing a customized product on site;
FIG. 2 illustrates an exemplary marker.
FIG. 3 illustrates an exemplary marker.
FIGS. 4A-B (collectively FIG. 4) are a flow chart of a process for the characterization of a user site with a marker.
FIG. 5 is a flow chart of a process for creating a visual component with the data found from the user's site.
FIG. 6 is a block diagram illustrating a computer system with which the present techniques can be implemented.
DESCRIPTION OF EXEMPLARY EMBODIMENTS
In the following description, for the purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be apparent that the present invention can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the present invention.
The terms "comprising," "comprises," and "comprise," as used herein, which are synonymous with "including," "containing," or "characterized by," are inclusive or open-ended and do not exclude additional elements or method steps that are not recited.
VISUALIZING A CUSTOMIZED PRODUCT IN SITU
An embodiment of the present approaches can be used in the context of a computerized system that can receive and store digital images, receive a request to manufacture a customized product, including an identification of an image to be framed and a type of frame and/or cardboard mat, and that displays a preview image of the customized product that simulates the actual appearance of the product as realistically as possible. With such a system, the preview image can be highly realistic under ideal lighting and display conditions. However, the appearance of actual framed images can vary widely in different environments. For example, custom framed products are typically displayed hanging on a wall, but the appearance of the product can vary widely in environments such as interior rooms with different lighting levels, types of lighting, types of walls, wallpaper, reflective surfaces, or other background elements.
Cardboard mat elements and frames are represented in 3D models with parameterized values to allow resizing and use with different visual materials. For example, 3D models of frame elements can be prepared by placing a real frame molding sample on a fixed easel adjacent to a first-surface mirror; a laser is projected at a known angle against the surface of the molding, a digital image of the molding is captured together with the laser line, and a programmed computer deduces, from the laser line, the geometry of a front surface of the molding, the rear profile being obtained from the first-surface mirror. A subsequent image is taken with the laser line removed, to capture the actual surface texture of the molding. The resulting perspective view of the molding surface texture is flattened to plan view to allow subsequent mapping of the flattened texture onto a computer-generated 3D model of the molding. For cardboard mats, the actual thickness can be measured manually and entered as a parametric value, and a plan-view digital image of the mat texture can be taken and used in 3D texture mapping.
In one embodiment, the preview image of a customized product can be modified so that it closely resembles the actual appearance that the customized framed product will have in a given environment.
The present approaches offer several benefits compared to prior approaches. For example, the design of the exemplary markers shown herein and the nature of the recognition differ from prior approaches to characterizing the geometry of a space. The design of the exemplary markers and the processing logic described herein allow characterization of both the geometry and the lighting. This robust characterization helps guarantee that the geometry of a visualized product is accurate in the characterized environment. In addition, the logic herein can adjust the nature of the presentation to compensate for the color or lighting of the user's environment based on a user image of a single marker in a single user-provided photograph.
In addition, the system(s) herein accommodate the dynamic nature of custom manufactured products, which can be configured both in the nature of the mounting and in the nature of the decoration. The system(s) contemplate sharing these characterized environments in an online marketplace in conjunction with the configured/designed product to be viewed in situ. The "complete" nature of the system(s) includes characterization of the product for configuration/decoration, allowing users to configure/decorate and visualize the resulting embodiments in characterized environments.
For the purpose of illustrating the in-situ visualization system and method, the embodiments described herein refer to a customized framed product. However, the in-situ visualization system and method can also be used to view other customized mounted and displayed products for which it is desirable to provide users with in-situ visualization of the customized product. Examples of other customized products to which the in-situ visualization system and method may apply include custom manufactured products bearing user-provided images or text, such as a custom skateboard, a custom globe, a custom baseball bat, a custom car roof, and products on which custom ornaments have been fitted, for example, custom clothing or another ornamentable product.
PROCESS FOR VIEWING A CUSTOMIZED PRODUCT IN SITU
With reference to FIG. 1, in one embodiment, a process for processing data comprises the following steps in general:
A digital representation of a marker is transmitted (block 101) to a user. For example, the user, who may be an end consumer of a commercial service for a custom manufactured product, uses a computer terminal to connect to a server computer associated with the service. The user establishes an account with the service or logs into an existing account. The user starts a custom product design process. The user is prompted to download or print a digital file, such as a PDF document or graphic image file, containing the representation of the marker.
The user prints (block 102) the marker on a sheet of paper. In one embodiment, the printed size of the sheet of paper is stored by the service in association with the data describing the marker. For example, the service may store metadata indicating that a particular marker is 8½ x 11 inches, or metric size A4, or any other suitable size, and the user will be prompted to print the marker on a sheet of that size.
The user positions (block 103) the paper bearing the marker in the user's environment at a location where the user wants to view the customized manufactured product. For example, the user attaches the sheet of paper to a wall on which the user plans to mount or display a customizable product.
The user takes (block 104) a digital photograph of the marker in situ. In this context, "in situ" means the actual location in the environment where the customized product is to be used or displayed.
The user transmits (block 105) the photograph to an In-Situ Visualization service.
As further described here, the service uses the marker to characterize (block 106) the position, orientation and lighting of the user's photograph.
The service produces (block 107) a digital rendering that displays the customized product in situ. The rendering can be produced in such a way that the customized product, as seen in the rendering, reflects the detected position and orientation of the marker in the user's photograph and the lighting at the actual location of the marker. For example, the rendering can be a digital graphic image that the service causes to be displayed on the user's computer terminal, to give the user a simulated view of the actual appearance of the customized manufactured product as if mounted or actually displayed in the environment where the user previously positioned the sheet. Instead of a digital image, the rendering can be a digital video, a digital audiovisual program, or a custom 3D graphic product model.
The component aspects of the preceding general process are now described.
MARKER
In one embodiment, a marker can have the following characteristics. The marker can have one or more linear components that can be recognized, using image recognition techniques, as lines in a photograph taken by a digital camera. For example, in one embodiment, the marker comprises a plurality of lines that are typically 0.25 to 0.5 inches wide or thick. Linear components of these sizes are expected to appear sufficiently thick or bold in a user image to allow computerized recognition of the lines, even in the presence of background elements of the user's environment such as wall textures, other mounted materials, doors, wall corners, floors, and other elements. Lines that are very thin as part of the marker can be difficult to recognize, whereas lines that are very thick can be difficult to position precisely in space relative to the environment.
In one embodiment, the marker has a border when printed and photographed, so that the linear components are isolated from the other elements of the picture of the environment. The border can be an empty margin. Thus, in one embodiment, an empty margin separates the linear components from an edge of a printed sheet bearing the marker. The border therefore allows better recognition of the marker relative to the environment and breaks or separates the connectivity of the linear components from other image elements that are not part of the marker.
In one embodiment, the linear components are arranged to form a connectivity graph. The connectivity graph is an association of arcs that are connected at points called nodes to form a plurality of internal regions called polytopes. In one embodiment, each particular marker has a particular connectivity graph with different connectivity when compared to other exemplary markers, as determined by a plurality of characteristics. Exemplary features that can differentiate one connectivity graph from another include aspects of line intersections, the number of lines, and the number of internal regions. Embodiments do not require the use of any particular marker format or style; for example, while an example disclosed herein has the general appearance of a rectangular grid, many other geometric arrangements can be used. What is important is that the service stores metadata that describes a reference connectivity graph that is expected to be seen in the digital image of the marker in the user's environment.
In one embodiment, the shape of the marker's connectivity graph is distinct for each orientation. For example, each marker is endowed with one or more characteristics such that changing the orientation or rotation of the marker produces a different visual appearance. This feature allows computer analysis of the user's digital image to determine the actual orientation in which the marker was placed in the user's environment.
In one embodiment, the spatial relationships of the connectivity graph are recorded and used as a means of detecting the position and orientation of the marker in the photograph. For example, detection may involve seeking to recognize known characteristics of nodes, lines, and polytopes in a reference marker that match the same characteristics in the user's digital image.
In one embodiment, the characteristics of the nodes include a count of the nodes throughout the marker graph, a count of the arcs that connect at a given node, and an adjacency of a node to polytopes having a given node count. These node characteristics can be used to differentiate one connectivity graph from another. That is, if the count of nodes, the count of arcs that connect at a given node, and an adjacency to polytopes of a given node count are known, then the same characteristics can be identified when the user's digital image is processed, and the marker is recognized in the user's digital image only when the counts and adjacencies match.
In one embodiment, the characteristics of the lines can also be used for detection and differentiation. In one embodiment, the relevant features include the number of lines (arcs), or the count of arcs in the marker graph, and the adjacency of each line to polytopes of a given arc count.
In one embodiment, the characteristics of the regions or internal polytopes can also be used for detection and differentiation. In one embodiment, the features relevant to the number of internal regions (polytopes) include a count of the polytopes in the marker graph and a count of the nodes in each polytope.
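The counting scheme above can be sketched as a small comparison routine. This is an illustrative sketch, not the patent's implementation: the graph representation (a list of arcs as node pairs plus a list of polytopes as node tuples) and all function and field names are assumptions.

```python
from collections import Counter

def graph_signature(arcs, polytopes):
    """Summarize a connectivity graph by the counts described above:
    total nodes, total arcs, arcs meeting at each node, and the
    node count of each polytope (internal region)."""
    degree = Counter()
    for a, b in arcs:                      # each arc connects two nodes
        degree[a] += 1
        degree[b] += 1
    return {
        "node_count": len(degree),
        "arc_count": len(arcs),
        "arcs_per_node": sorted(degree.values()),
        "polytope_node_counts": sorted(len(p) for p in polytopes),
    }

# A toy 4-node square marker: 4 arcs and one internal region.
reference = graph_signature(
    arcs=[(0, 1), (1, 2), (2, 3), (3, 0)],
    polytopes=[(0, 1, 2, 3)],
)
candidate = graph_signature(
    arcs=[(10, 11), (11, 12), (12, 13), (13, 10)],
    polytopes=[(10, 11, 12, 13)],
)
# The candidate is accepted only when every count agrees with the reference.
assert candidate == reference
```

Because the signature is independent of the node labels, the same marker recognized anywhere in the image produces the same counts.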
In certain embodiments, the connectivity graph of the lines can also be read by the user as a symbol, graphic, or legend, such as a brand or trademark of a company.
In one embodiment, one or more open spaces are provided in the printed marker; these can be unprinted or printed with light colors or tones, and provide a means for detecting the lighting of the user's site. Open spaces can be termed "light sampling points." In addition, the fully printed areas of the marker's line graph are known and can be termed "dark sampling points." If light sampling points and dark sampling points are detected in a user image of the marker in the environment, then, based on luminance values or other data representing the sampling points, the computer can determine a luminance gradient that exists between the sampling points and can modify the appearance of a digital rendering to simulate the real lighting of the user's environment.
Colors can comprise black, white, and gray in one embodiment, and can facilitate different types of image analysis. For example, if the computer cannot detect a gray space in a candidate marker in the user's image, then the computer may determine that the user's image has an excessive white level or is "blown out" and needs to be retaken to allow accurate recognition.
Lighting in an environment may appear to have a color bias when recorded by a digital device such as a digital camera. This bias occurs because the light that illuminates the environment can be of one or more different types, including sunlight, incandescent, mercury vapor, fluorescent, etc., each having a particular spectral distribution that the human eye perceives as white but that the digital device registers as a certain color.
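A minimal sketch of how such a color bias could be estimated and removed, assuming the camera's rendering of a known mid-gray patch has been sampled from the photograph. The per-channel gain model and all names are illustrative assumptions, not the patent's method.

```python
def color_bias_gains(sampled_gray, target_gray=(128, 128, 128)):
    """Per-channel gains mapping the sampled appearance of a known
    mid-gray patch back to neutral gray."""
    return tuple(t / s for t, s in zip(target_gray, sampled_gray))

def apply_gains(pixel, gains):
    """Apply the gains to one RGB pixel, clamped to the 8-bit range."""
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

# Incandescent-like cast: the gray patch photographs warm (reddish).
gains = color_bias_gains(sampled_gray=(160, 128, 100))
assert apply_gains((160, 128, 100), gains) == (128, 128, 128)
```

The same gains could then be applied in reverse to a product rendering so that it shows the same cast as the rest of the photograph.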
In one embodiment, the marker includes a mid-tone gray area that allows accurate recognition of a lighting bias in the user's image. In addition, or alternatively, pastel color tones can be used to help recognize a color bias in the lighting of the user's environment. For example, it may be useful to include a well-known shade of green or a shade of pink in selected areas of the reference marker to assist in recognizing whether the user's environment is lit mainly by fluorescent lamps or incandescent lamps, and to apply a similar color bias to the digital rendering that simulates the customized manufactured product in the environment under the same lighting.
EXEMPLARY MARKERS
FIG. 2 and FIG. 3 illustrate exemplary markers. Referring first to FIG. 2, in one embodiment, a marker resembles a trademark of a commercial entity, in this case, the Z logo of Zazzle Inc., Redwood City, California. The marker 202 comprises a plurality of arcs 204. Exemplary nodes 206A, 206B are at intersections of the arcs, and the marker defines a plurality of polytopes, examples of which are polytopes 208A, 208B, 208C. Corner parts 210 of the marker 202 are not uniform with respect to arc intersection shape, so that an orientation of the marker can be detected using computerized image analysis techniques.
The arc count associated with a given node also varies; for example, node 206A is at an intersection of four (4) arcs, whereas node 206B is at an intersection of three (3) arcs. Therefore, when marker 202 is recognized in a user image, the marker can be characterized in terms of the number of nodes and the count of arcs at each node, and compared to reference data describing a reference marker to determine whether a match occurs. The marker 202 can also be characterized by the number of adjacent polytopes associated with a node; for example, node 206A is associated with four (4) adjacent polytopes whereas node 206B has three (3) adjacencies. In addition, the characterization data for a given marker allows efficient image processing; for example, an image recognition algorithm can be configured to reject a candidate item recognized in a user image as a potential matching marker as soon as it is determined that the item's characterization does not match the reference marker. For example, when the computer operates to recognize a candidate item, as soon as the computer determines that the candidate item has too few or too many arcs, nodes, or polytopes, the candidate item can be rejected and the process can proceed to consider another candidate item.
The number of items for characterizing a marker is preferably relatively small, to avoid requiring unnecessarily large amounts of data processing time. For example, it is known that when a marker is complex and has a large number of arcs, nodes, and polytopes, the processing time and storage space required for accurate recognition of the marker can become prohibitive. Therefore, markers with relatively simple connectivity graphs are preferred.
As another example, in FIG. 3, a marker resembles a grid of rectangles. The arrangement of FIG. 3 offers the benefit of fitting on a rectangular letter-size sheet of paper.
In both FIG. 2 and FIG. 3, the marker includes a blank border around the perimeter of the marker, lines large enough to be detected in a user image, and other features such as lines, intersections, and internal regions that are uniquely recognizable against background elements. In addition, FIG. 2 and FIG. 3 represent markers that incorporate shapes or graphics that are uncommon in a natural environment, which improves the performance of the present recognition process.
In various embodiments, the service can provide a marker that is particular to the user or end customer, or can provide a plurality of different markers from which the end user can select and download. For example, different markers can be associated with or linked to different products, services, users, or classes of products. For example, different products may have different sizes, and the user may wish to view two different products of different sizes in the same general environment; in this case, the service can provide two different markers of different sizes. Different products of different types can also warrant the use of different markers. For example, a custom painted or printed stretched-canvas product may use a different type of marker than a custom decorated skateboard deck.
IN-SITU VISUALIZATION SERVICE
In one embodiment, a computerized in-situ visualization service comprises one or more computer programs or other software elements that are configured to perform the following general tasks: characterizing the user's site with the marker; creating a visual component using the data found from the user's site and a photograph or other digital image; and presenting the digital rendering.
PROCESS FOR CHARACTERIZING THE USER'S SITE WITH A MARKER
In one embodiment, characterizing the user's site with the marker generally comprises digital recognition of a connected graph based on a reference graph, using a process illustrated in flowchart form in FIG. 4.
First, assume, as described above, that the user has made a printed copy (block 102) of a marker, placed (block 103) the printed marker in the user's environment at a location where a customized product will be displayed or mounted, taken (block 104) a photograph or digital image of the environment including the marker, and uploaded (block 105) the photograph to the service. For example, the user's photograph may be a digital image of part of the interior of a room where the marker has been attached to a wall.
The process of FIG. 4 may comprise computer logic to recognize the marker in the user's photograph, for example, as part of using (block 106) the user's photograph of the marker to characterize the location and orientation of the marker in the user's photograph, and the lighting at the marker location:
A linear image is produced by filtering (block 401) the user's photograph so that linear features in the size range of the marker lines are retained and other non-linear features are filtered out. For example, a band-limited filter or an edge filter can be used. The result is an image that, when displayed, comprises only linear features in the size range of the marker lines, as black on a white background.
The linear image is further filtered (block 402) into a Boolean pixel set using cellular automata, so that the linear elements are one (1) pixel wide and each line is represented in the image by its pixels being set to true. Exemplary value tables for the cellular automata are provided in the pseudocode example below. The cellular automata approach uses a rule-based system with bounded neighborhood inputs. In this approach, the neighboring pixels of a given pixel under consideration form instructions or opcodes for an automaton that produces a resulting pixel value based on the input, and the given pixel is then replaced by the resulting pixel value. Unlike prior applications of cellular automata, in this approach the cellular automata are applied to line thinning. An example of pseudocode for the cellular automata approach is provided below.
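The opcode mechanism can be sketched as follows. This is a toy illustration of the automaton machinery only: the eight neighbors of a pixel are packed into an 8-bit opcode that indexes a 256-entry rule table. The particular rule shown (delete isolated pixels) is an invented stand-in, not the patent's thinning table, which is not reproduced here.

```python
def neighbor_opcode(grid, r, c):
    """Pack the 8 neighbors of (r, c) into an 8-bit opcode,
    clockwise from the top-left; out-of-bounds cells read as 0."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc]:
            code |= 1 << bit
    return code

def step(grid, rule):
    """One synchronous automaton pass: each true pixel is replaced by
    rule[opcode]; `rule` is a 256-entry boolean lookup table."""
    return [
        [bool(cell) and rule[neighbor_opcode(grid, r, c)]
         for c, cell in enumerate(row)]
        for r, row in enumerate(grid)
    ]

# Toy rule (NOT the patent's thinning table): keep a pixel only if it
# has at least one true neighbor, which deletes isolated specks.
rule = [code != 0 for code in range(256)]
grid = [
    [True,  True,  False, False],
    [True,  True,  False, False],
    [False, False, False, True],   # isolated speck at (2, 3)
]
out = step(grid, rule)
assert out[2][3] is False   # speck removed
assert out[0][0] is True    # connected pixels survive
```

A real thinning table would instead encode, per neighborhood pattern, whether removing the center pixel preserves line connectivity, and the pass would be repeated until the image stops changing.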
The pixel set is traversed (block 403). When a true pixel is found, a candidate graph is constructed covering the connected pixels. For example, when connected pixels are identified, a node is recognized. If no true pixels are found, the algorithm ends. As the candidate graph is created and saved in memory, if the counts of nodes, arcs, or polytopes become greater than those of the reference graph, the candidate graph is discarded, and the stored values of all the connected pixels of the current line network are set to false. In one embodiment, the candidate graph and the reference graph are represented in a computer using a winged-edge data structure. Other data structures and models can be used to represent the candidate connectivity graph and the reference connectivity graph, and the invention is not limited to the winged-edge data structure.
By building and using connectivity graphs, the process can quickly discard candidate graphs that do not meet one or more connectivity criteria of the reference graph. This process differs from other approaches in which complete recognition and characterization of a candidate graph in the user's image may be necessary. For example, in this approach there is no need to complete the recognition of a candidate graph that becomes excessively large; it is simply discarded at the first opportunity, increasing performance and reducing the time needed to recognize the marker. When construction of a candidate graph completes, if the counts of nodes, arcs, or polytopes are lower than those of the reference graph, the candidate graph is discarded.
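The early-discard idea can be sketched as a bounded flood fill. The bound shown (a maximum pixel count) is a simplified stand-in for the node/arc/polytope count checks described above, and all names and the data layout (a set of true-pixel coordinates) are illustrative assumptions.

```python
def grow_candidate(pixels, start, ref_limits):
    """Flood-fill connected true pixels from `start`, abandoning the
    candidate as soon as it exceeds a reference-graph bound.
    Returns the connected set, or None if discarded early."""
    max_pixels = ref_limits["max_pixels"]
    seen, stack = set(), [start]
    while stack:
        p = stack.pop()
        if p in seen or p not in pixels:
            continue
        seen.add(p)
        if len(seen) > max_pixels:   # early discard: no need to finish
            return None              # tracing an oversized component
        r, c = p
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return seen

pixels = {(0, 0), (0, 1), (0, 2)}    # a short 3-pixel line
assert grow_candidate(pixels, (0, 0), {"max_pixels": 10}) == pixels
# An over-large component is abandoned mid-traversal, not completed.
assert grow_candidate(pixels, (0, 0), {"max_pixels": 2}) is None
```

In the full process the same principle applies to node, arc, and polytope counts as the winged-edge structure is built incrementally.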
If the complete set of connectivity characteristics of the candidate graph matches the reference graph (block 404), the algorithm continues at block 407. If a candidate graph has been discarded and there remain true pixels in the pixel set (block 405), traversal of the pixel set continues at block 403. Otherwise, the algorithm ends (block 406), possibly with a notification to the user that the marker could not be detected in the user's photograph.
When a match exists for the candidate graph, the orientation and position of the matching graph in the user's photograph are found by computing (block 407) the marker transform, which maps the known nodes in the reference graph to the found nodes in the matching graph. Thus, when a matching connectivity graph is identified, the pixel coordinates within the user's image of the nodes, arcs, and polytopes are known and can be mapped, using the marker transform, to the reference graph. Point-mapping techniques using singular value decomposition, for example, can be used to determine the marker transform.
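As an illustration of point mapping by least squares (NumPy's `lstsq` solves via singular value decomposition), the sketch below fits a 2D affine transform from matched node coordinates. A real marker transform would likely be a full perspective homography; the affine model and all names here are simplifying assumptions.

```python
import numpy as np

def fit_affine(ref_pts, img_pts):
    """Least-squares affine transform mapping reference-marker node
    coordinates to the matched node coordinates in the user image.
    Returns a 2x3 matrix."""
    ref = np.asarray(ref_pts, float)
    img = np.asarray(img_pts, float)
    A = np.hstack([ref, np.ones((len(ref), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, img, rcond=None)    # SVD-based solve, (3, 2)
    return M.T                                     # (2, 3)

def apply_affine(M, pt):
    """Map one reference point into image space."""
    x, y = pt
    return tuple(M @ np.array([x, y, 1.0]))

# Reference marker nodes and where they landed in a hypothetical user
# photo: scaled by 2 and shifted by (10, 20).
ref = [(0, 0), (1, 0), (0, 1), (1, 1)]
img = [(10, 20), (12, 20), (10, 22), (12, 22)]
M = fit_affine(ref, img)
x, y = apply_affine(M, (0.5, 0.5))
assert abs(x - 11) < 1e-6 and abs(y - 21) < 1e-6
```

With more than three correspondences the least-squares fit also averages out small pixel-localization errors in the detected nodes.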
When the marker transform is determined, light sampling points can be found (block 408) in the photograph. These points are used to determine a white point for the image and a luminance gradient or map for rendering the visual component. For example, the coordinates in reference space of a first light sampling point can be transformed, using the marker transform, to equivalent points in the user image space; at those points, pixel values can be sampled to determine a base white value for the user's image. In one embodiment, the luminance gradient is a set of values representing the range of magnitudes of light reflected in the user's environment, and can be represented by a set of delta values in image space, for example, Δu and Δv values.
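A hedged sketch of estimating such a gradient between two light sampling points: the Rec. 601 luma weights and the linear per-unit (Δu, Δv) gradient model are assumptions for illustration, not taken from the patent.

```python
def luminance(rgb):
    """Rec. 601 luma from an RGB triple."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_gradient(p0, l0, p1, l1):
    """Delta-luminance per unit (u, v) between two light sampling
    points p0, p1 with sampled luminances l0, l1."""
    du, dv = p1[0] - p0[0], p1[1] - p0[1]
    dl = l1 - l0
    norm2 = du * du + dv * dv
    return (dl * du / norm2, dl * dv / norm2)

# Two sampled white patches: the right-hand one reads darker, so
# luminance falls along +u across the photo.
l0 = luminance((250, 250, 250))
l1 = luminance((200, 200, 200))
grad_u, grad_v = luminance_gradient((0, 0), l0, (100, 0), l1)
assert grad_u < 0 and grad_v == 0
```

With more sampling points the per-point deltas could be assembled into a full luminance map rather than a single linear gradient.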
The marker transform can also be used (block 409) to find the dark sampling points in the user's image, which are used to establish a black point for rendering the visual component. Thus, information about the user's environment, including its geometry and lighting, can be extrapolated, and appropriate changes can be applied to the image in terms of chromatic spectrum, luminance, brightness, and other attributes, so that the image appears on the user's computer screen as similar as possible to the real appearance of the customized manufactured product when installed in the user's environment.
PROCESS FOR CREATING A VISUAL COMPONENT WITH DATA FOUND FROM THE USER'S SITE
In one embodiment, creating a visual component using data found from the user's site and a photograph or other digital image may involve the steps illustrated in the flowchart of FIG. 5.
Initially, a visual component is built using layers as follows. The user's photograph is adjusted (block 501) using the data obtained from the light sampling points and the dark sampling points.
A customized product reference is placed (block 502) on the user's photograph using the marker transform for placement; the customized product reference can comprise a name or unique identifier, a geometric placeholder such as a rectangle within a coordinate system, and that coordinate system transformed using the marker transform, which together represent the customized manufactured product in which the user is interested.
The luminance gradient is applied (block 503) to modify the luminance of the customized product to match the light gradient of the user's photograph, based on a known luminance point in the user's image space.
Second, the customized product is displayed (block 504) using the following steps. In one embodiment, the user chooses the customized product and its attributes by interacting with the service. In one embodiment, the visual component is loaded at the user's location. In one embodiment, the customized product presentation component is configured. In one embodiment, the customized product reference is set in the customized product component.
Finally, in one embodiment, the in-situ component is rendered and sent to the display unit or the user's browser.
IMPLEMENTATION MECHANISMS: HARDWARE OVERVIEW
According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices can be hard-wired to perform the techniques, or can include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or can include one or more general-purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices can also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices can be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques.
For example, FIG. 6 is a block diagram illustrating a computer system 600. Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a hardware processor 604 coupled to bus 602 for processing information. The hardware processor 604 can be, for example, a general purpose microprocessor.
The computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. The main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in non-transitory storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.
The computer system 600 also includes a read-only memory (ROM) 608 or other static storage device coupled to a bus 602 to store static information and instructions for the processor 604. A storage device 610, such as a magnetic disk or an optical disk, is provided and coupled to the 602 bus to store information and instructions.
The computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
Computer system 600 may implement the techniques described herein using custom hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computer system, causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “storage media”, as used herein, refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media include, for example, optical or magnetic disks, such as storage device 610. Volatile media include dynamic memory, such as main memory 606. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, solid state drives, magnetic tape or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
Storage media are distinct from, but may be used in conjunction with, transmission media. Transmission media participate in transferring information between storage media. For example, transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications.
Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
Computer system 600 also includes a communications interface 618 coupled to bus 602. Communications interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local area network 622. For example, communications interface 618 may be an integrated services digital network (ISDN) card, a cable modem, a satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communications interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communications interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communications interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media.
Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communications interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communications interface 618.
The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage, for later execution. EXTENSIONS AND ALTERNATIVES
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from embodiment to embodiment. Thus, the sole and exclusive indicator of what the invention is, and what is intended by the applicants to be the invention, is the set of claims that accompany this application, in the specific form in which such claims are presented, including any subsequent correction. Any definitions expressly set forth herein for terms contained in the claims shall govern the meaning of those terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of that claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (13)
[0001]
1. COMPUTER-IMPLEMENTED METHOD, for visualizing a customized product in situ, characterized by comprising: storing first data representing a reference connectivity graph (202) of a marker, the marker comprising one or more light sampling points and one or more dark sampling points; obtaining (104) a digital image of at least the marker; analyzing (403) the digital image to generate second data representing a candidate connectivity graph; based at least in part on the first data and the second data, determining (404) whether the candidate connectivity graph matches the reference connectivity graph; in response to determining that the candidate connectivity graph matches the reference connectivity graph, generating (407) third data, comprising a marker transform, that at least maps nodes of the reference connectivity graph to nodes of the candidate connectivity graph; transforming (408), using the marker transform, at least one light sampling point of the marker to at least one equivalent light sampling point in the digital image; sampling at least one pixel value at the at least one equivalent light sampling point in the digital image to determine a baseline white value for the digital image of the marker; transforming (409), using the marker transform, at least one dark sampling point of the marker to at least one equivalent dark sampling point in the digital image; sampling at least one pixel value at the at least one equivalent dark sampling point in the digital image to determine a baseline black value for the digital image of the marker; creating a visual component (107) that visualizes the customized product in the digital image using at least the third data, the baseline white value and the baseline black value to simulate the lighting of the environment in which the digital image of the marker was obtained.
[0002]
2. METHOD, according to claim 1, characterized in that the first data indicates one or more of: a count of the nodes (206A) of the reference connectivity graph, a count of the arcs (204) connecting to a particular node of the reference connectivity graph, a count of the lines or arcs (204) of the reference connectivity graph, a count of the polytopes (208A) of the reference connectivity graph, or a count of the nodes of a particular polytope (206A) of the reference connectivity graph.
[0003]
3. METHOD, according to claim 1, characterized in that the second data indicates one or more of: a count of the nodes (206A) of the candidate connectivity graph, a count of the arcs (204) connecting to a particular node of the candidate connectivity graph, a count of the lines or arcs (204) of the candidate connectivity graph, a count of the polytopes (208A) of the candidate connectivity graph, or a count of the nodes of a particular polytope (206A) of the candidate connectivity graph.
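By way of illustration only (not part of the claims), the count-based comparison that claims 2 and 3 enumerate can be sketched as a signature check: if the node, arc, arcs-per-node and polytope counts of the candidate graph disagree with those of the reference graph, the graphs cannot match. The dictionary layout below is an assumption for the sketch, not the patent's data structure.

```python
def graph_signature(graph):
    """Counts used to screen a connectivity graph: node count, arc count,
    sorted arcs-per-node, and sorted polytope sizes. `graph` is assumed to
    be {"nodes": [...], "arcs": [(a, b), ...], "polytopes": [[...], ...]}."""
    arcs_per_node = {}
    for a, b in graph["arcs"]:
        arcs_per_node[a] = arcs_per_node.get(a, 0) + 1
        arcs_per_node[b] = arcs_per_node.get(b, 0) + 1
    return (len(graph["nodes"]),
            len(graph["arcs"]),
            tuple(sorted(arcs_per_node.values())),
            tuple(sorted(len(p) for p in graph["polytopes"])))

def signatures_match(reference, candidate):
    """Necessary (not sufficient) condition for a candidate connectivity
    graph to match the reference connectivity graph."""
    return graph_signature(reference) == graph_signature(candidate)
```

A full match would then establish the node-to-node mapping from which the marker transform of claim 1 is derived.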
[0004]
4. METHOD, according to claim 1, characterized in that the marker comprises one or more colored open spaces to assist a digital image analysis technique, applied to the digital image, in detecting the lighting of the environment in which the marker was photographed; and wherein the one or more colored open spaces are colored in a medium-tone gray or a pastel tone to aid the digital image analysis technique in detecting color trends of the lighting in the environment in which the marker was photographed.
[0005]
5. METHOD, according to claim 1, characterized in that a shape of the marker's connectivity graph is orientationally distinct.
[0006]
6. METHOD, according to claim 1, characterized by further comprising: applying a band-limited filter or an edge filter to the digital image to produce a digital image comprising linear features within a line-size range of the black-on-white marker; and using a rule-based cellular automaton with limited-neighborhood inputs to thin at least one of the linear features.
[0007]
7. METHOD, according to claim 1, characterized by further comprising determining an orientation or a position of the marker in the digital image using the marker transform.
[0008]
8. METHOD, according to claim 1, characterized by comprising the use of a point mapping technique that involves singular value decomposition to determine the marker transform.
[0009]
9. METHOD, according to claim 1, characterized by further comprising: using the marker transform to transform coordinates of a light sampling point in a coordinate space of the marker to a point in a coordinate space of the digital image; and sampling pixel values at the point in the coordinate space of the digital image to determine a baseline white value.
[0010]
10. METHOD, according to claim 1, characterized in that constructing the visual component comprises placing a customized product reference in the digital image using the marker transform for placement.
[0011]
11. METHOD, according to claim 1, characterized by further comprising: analyzing the digital image to detect one or more light sampling points and one or more dark sampling points; determining a luminance gradient that exists between the sampling points; and constructing the visual component, which includes modifying the luminance of the visual component to match the luminance gradient.
[0012]
12. COMPUTER-READABLE STORAGE MEDIA, characterized by storing instructions which, when executed, cause performance of a method as recited in any of claims 1-11.
[0013]
13. ONE OR MORE SPECIAL-PURPOSE COMPUTING DEVICES, characterized by comprising: one or more processors; and means for carrying out any of the steps recited in any of claims 1-11.
Similar technologies:
Publication number | Publication date | Patent title
BR102012021901B1|2020-10-27|computer implemented method, computer read storage media, one or more special purpose computing devices and computer system
US9262854B2|2016-02-16|Systems, methods, and media for creating multiple layers from an image
US8422794B2|2013-04-16|System for matching artistic attributes of secondary image and template to a primary image
US8274523B2|2012-09-25|Processing digital templates for image display
US8538986B2|2013-09-17|System for coordinating user images in an artistic design
US8212834B2|2012-07-03|Artistic digital template for image display
US8849853B2|2014-09-30|Method for matching artistic attributes of a template and secondary images to a primary image
US8854395B2|2014-10-07|Method for producing artistic image template designs
US20110029914A1|2011-02-03|Apparatus for generating artistic image template designs
US8289340B2|2012-10-16|Method of making an artistic digital template for image display
US20110029860A1|2011-02-03|Artistic digital template for image display
US20110029562A1|2011-02-03|Coordinating user images in an artistic design
US8345057B2|2013-01-01|Context coordination for an artistic digital template for image display
US8332427B2|2012-12-11|Method of generating artistic template designs
BR102014000385A2|2015-07-14|Using infrared imaging to create digital images for use in product customization
US20190295321A1|2019-09-26|Mobile application for signage design solution using augmented reality
JP2002354229A|2002-12-06|Image forming method, image forming apparatus, and computer readable recording medium with image forming program stored thereon
Family patents:
Publication number | Publication date
US8654120B2|2014-02-18|
JP5646564B2|2014-12-24|
EP3462383A1|2019-04-03|
CA2785412C|2016-08-23|
AU2012211439B2|2015-02-19|
EP2565824B1|2018-12-19|
US9436963B2|2016-09-06|
BR102012021901A2|2014-12-09|
US20130050218A1|2013-02-28|
CA2785412A1|2013-02-28|
US20130050205A1|2013-02-28|
US20140160118A1|2014-06-12|
JP2013054740A|2013-03-21|
AU2012211439A1|2013-03-14|
EP2565824A1|2013-03-06|
US9147213B2|2015-09-29|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title

US5039132A|1990-01-25|1991-08-13|Minnesota Mining And Manufacturing Company|Method and form used for ordering custom printed products|
US5615123A|1991-04-02|1997-03-25|Creatacard, Inc.|System for creating and producing custom card products|
KR0183795B1|1995-12-28|1999-05-01|김광호|Color adjusting method and apparatus thereof|
US5999914A|1996-10-16|1999-12-07|Microsoft Corporation|Electronic promotion system for an electronic merchant system|
US6058373A|1996-10-16|2000-05-02|Microsoft Corporation|System and method for processing electronic order forms|
US5897622A|1996-10-16|1999-04-27|Microsoft Corporation|Electronic shopping and merchandising system|
GB9707704D0|1997-04-16|1997-06-04|British Telecomm|Display terminal|
US6345278B1|1998-06-04|2002-02-05|Collegenet, Inc.|Universal forms engine|
JP2000215317A|1998-11-16|2000-08-04|Sony Corp|Image processing method and image processor|
US7107226B1|1999-01-20|2006-09-12|Net32.Com, Inc.|Internet-based on-line comparison shopping system and method of interactive purchase and sale of products|
US6343287B1|1999-05-19|2002-01-29|Sun Microsystems, Inc.|External data store link for a profile service|
US6542515B1|1999-05-19|2003-04-01|Sun Microsystems, Inc.|Profile service|
US6725257B1|1999-11-30|2004-04-20|Chrome Data Corporation|Computationally efficient process and apparatus for configuring a product over a computer network|
US6459435B1|2000-01-11|2002-10-01|Bluebolt Networks, Inc.|Methods, systems and computer program products for generating storyboards of interior design surface treatments for interior spaces|
US7302114B2|2000-01-18|2007-11-27|Branders.Com, Inc.|Methods and apparatuses for generating composite images|
US20010034668A1|2000-01-29|2001-10-25|Whitworth Brian L.|Virtual picture hanging via the internet|
US7262778B1|2000-02-11|2007-08-28|Sony Corporation|Automatic color adjustment of a template design|
US6577726B1|2000-03-31|2003-06-10|Siebel Systems, Inc.|Computer telephony integration hotelling method and system|
US7526437B1|2000-04-06|2009-04-28|Apple Inc.|Custom stores|
US7024046B2|2000-04-18|2006-04-04|Real Time Image Ltd.|System and method for the lossless progressive streaming of images over a communication network|
US7016869B1|2000-04-28|2006-03-21|Shutterfly, Inc.|System and method of changing attributes of an image-based product|
US6711283B1|2000-05-03|2004-03-23|Aperio Technologies, Inc.|Fully automatic rapid microscope slide scanner|
US7117293B1|2000-05-12|2006-10-03|Apple Computer, Inc.|Method and apparatus for archiving and unarchiving objects|
US7617184B2|2000-05-18|2009-11-10|Endeca Technologies, Inc.|Scalable hierarchical data-driven navigation system and method for information retrieval|
US7062483B2|2000-05-18|2006-06-13|Endeca Technologies, Inc.|Hierarchical data-driven search and navigation system and method for information retrieval|
DE10034122A1|2000-07-13|2002-01-24|Hed Gmbh Haftetikettendruck|Set making labels for data-carrying media, e.g. CD or card, employing interactive printing program, includes durable, protective polyester covering material of specified thickness|
US20020073001A1|2000-12-13|2002-06-13|Itt Manufacturing Enterprises, Inc.|System and process for assisting a user to configure a configurable product|
US7761397B2|2001-03-21|2010-07-20|Huelsman David L|Rule processing method and apparatus providing automatic user input selections|
US7079139B2|2001-07-02|2006-07-18|Kaon Interactive, Inc.|Method and system for measuring an item depicted in an image|
US7580871B2|2001-08-31|2009-08-25|Siebel Systems, Inc.|Method to generate a customizable product configurator|
US7650296B1|2001-08-31|2010-01-19|Siebel Systems, Inc.|Configurator using structure and rules to provide a user interface|
US7274380B2|2001-10-04|2007-09-25|Siemens Corporate Research, Inc.|Augmented reality system|
US20070203771A1|2001-12-17|2007-08-30|Caballero Richard J|System and method for processing complex orders|
JP2003264740A|2002-03-08|2003-09-19|Cad Center:Kk|Observation scope|
US20030182402A1|2002-03-25|2003-09-25|Goodman David John|Method and apparatus for creating an image production file for a custom imprinted article|
US20070226155A1|2002-03-29|2007-09-27|Jai-Jein Yu|Extended attribute-based pricing system and method|
JP2004048674A|2002-05-24|2004-02-12|Olympus Corp|Information presentation system of visual field agreement type, portable information terminal, and server|
JP3962642B2|2002-07-08|2007-08-22|キヤノン株式会社|Image processing apparatus and method|
JP3944024B2|2002-08-20|2007-07-11|株式会社東芝|Image processing method, semiconductor device manufacturing method, pattern inspection apparatus, and program|
WO2004034221A2|2002-10-09|2004-04-22|Bodymedia, Inc.|Apparatus for detecting, receiving, deriving and displaying human physiological and contextual information|
US20040143644A1|2003-01-21|2004-07-22|Nec Laboratories America, Inc.|Meta-search engine architecture|
WO2004079526A2|2003-02-28|2004-09-16|Gannon Technologies Group|Systems and methods for source language word pattern matching|
JP2004282708A|2003-02-28|2004-10-07|Fuji Photo Film Co Ltd|Print producing apparatus and method, and information detecting apparatus, method and program|
DE60300984T2|2003-06-04|2006-04-27|Sap Ag|Method and computer system for optimizing a Boolean expression for request processing|
US20050138078A1|2003-07-09|2005-06-23|Chad Christenson|Catalog management module in a custom product configuration system|
WO2005079939A2|2004-01-27|2005-09-01|Opentv, Inc.|Pre-generated game creation methods and apparatus|
US7734731B2|2004-03-18|2010-06-08|Avaya Inc.|Method and apparatus for a publish-subscribe system with third party subscription delivery|
US20050226498A1|2004-03-31|2005-10-13|Lee David L|Prepress workflow process employing frequency modulation screening techniques|
WO2005111922A1|2004-05-18|2005-11-24|Silverbrook Research Pty Ltd|Pharmaceutical product tracking|
US7474318B2|2004-05-28|2009-01-06|National University Of Singapore|Interactive system and method|
US20060004697A1|2004-06-09|2006-01-05|Lipsky Scott E|Method and system for restricting the display of images|
US7277830B2|2004-08-17|2007-10-02|Dirtt Environmental Solutions Ltd.|Capturing a user's design intent with resolvable objects|
US7299171B2|2004-08-17|2007-11-20|Contentguard Holdings, Inc.|Method and system for processing grammar-based legality expressions|
US8631347B2|2004-11-15|2014-01-14|Microsoft Corporation|Electronic document style matrix|
US8982109B2|2005-03-01|2015-03-17|Eyesmatch Ltd|Devices, systems and methods of capturing and displaying appearances|
US20060197775A1|2005-03-07|2006-09-07|Michael Neal|Virtual monitor system having lab-quality color accuracy|
US8963926B2|2006-07-11|2015-02-24|Pandoodle Corporation|User customized animated video and method for making the same|
US20070033568A1|2005-07-30|2007-02-08|Barrieau Shawn M|System and method for managing product customization|
AU2006292351A1|2005-09-16|2007-03-29|Wizard International, Inc.|Framed art visualization software|
US8023746B2|2005-10-14|2011-09-20|Disney Enterprises, Inc.|Systems and methods for decoding an image to determine a digital identifier|
US7769236B2|2005-10-31|2010-08-03|National Research Council Of Canada|Marker and method for detecting said marker|
US7502788B2|2005-11-08|2009-03-10|International Business Machines Corporation|Method for retrieving constant values using regular expressions|
US20070124215A1|2005-11-29|2007-05-31|Simmons Lawrence D Jr|Virtual shopping with personal image environment|
US7457730B2|2005-12-15|2008-11-25|Degnan Donald A|Method and system for virtual decoration|
US7801912B2|2005-12-29|2010-09-21|Amazon Technologies, Inc.|Method and apparatus for a searchable data service|
US20070174781A1|2006-01-25|2007-07-26|Catalog Data Solutions|Parameter visualization|
JP2007257176A|2006-03-22|2007-10-04|Fujitsu Ltd|Information processing method, information processor and information processing program|
US7702645B2|2006-06-30|2010-04-20|Nokia Corporation|Method, apparatus and computer program product for making semantic annotations for easy file organization and search|
JP4926817B2|2006-08-11|2012-05-09|キヤノン株式会社|Index arrangement information measuring apparatus and method|
US20080140492A1|2006-09-29|2008-06-12|Armand Rousso|Systems, methods and apparatuses forimportation and exportation transaction facilitation|
US20080091551A1|2006-09-29|2008-04-17|Marvin Olheiser|Knowledge-based customizable product design system|
GB2443846B|2006-11-15|2011-12-07|Joseph Timothy Poole|Computing system|
US7930313B1|2006-11-22|2011-04-19|Adobe Systems Incorporated|Controlling presentation of refinement options in online searches|
KR100918392B1|2006-12-05|2009-09-24|한국전자통신연구원|Personal-oriented multimedia studio platform for 3D contents authoring|
US7885956B2|2007-03-05|2011-02-08|Kelora Systems, Llc|Display and search interface for product database|
US20090080773A1|2007-09-20|2009-03-26|Mark Shaw|Image segmentation using dynamic color gradient threshold, texture, and multimodal-merging|
DE102007045834B4|2007-09-25|2012-01-26|Metaio Gmbh|Method and device for displaying a virtual object in a real environment|
US8917424B2|2007-10-26|2014-12-23|Zazzle.Com, Inc.|Screen printing techniques|
US8174521B2|2007-10-26|2012-05-08|Zazzle.Com|Product modeling system and method|
BR112014004988A2|2011-08-31|2017-05-30|Zazzle Com Inc|tile process for digital image recovery|
US9213920B2|2010-05-28|2015-12-15|Zazzle.Com, Inc.|Using infrared imaging to create digital images for use in product customization|
US9147213B2|2007-10-26|2015-09-29|Zazzle Inc.|Visualizing a custom product in situ|
EP2223239A4|2007-11-07|2012-08-22|Skinit Inc|Customizing print content|
US7856434B2|2007-11-12|2010-12-21|Endeca Technologies, Inc.|System and method for filtering rules for manipulating search results in a hierarchical search and navigation system|
WO2009094724A1|2008-02-01|2009-08-06|Innovation Studios Pty Ltd|Method for online selection of items and an online shopping system using the same|
KR100927009B1|2008-02-04|2009-11-16|광주과학기술원|Haptic interaction method and system in augmented reality|
US8521600B2|2008-04-23|2013-08-27|Hodge Products, Inc.|Online ordering system and method for keyed devices|
US20090289955A1|2008-05-22|2009-11-26|Yahoo! Inc.|Reality overlay device|
US8064733B2|2008-06-24|2011-11-22|Microsoft Corporation|Variable resolution images|
US7933473B2|2008-06-24|2011-04-26|Microsoft Corporation|Multiple resolution image storage|
US8233722B2|2008-06-27|2012-07-31|Palo Alto Research Center Incorporated|Method and system for finding a document image in a document collection using localized two-dimensional visual fingerprints|
CN102177525A|2008-07-29|2011-09-07|彩滋网站公司|Product customization system and method|
US8694893B2|2008-08-08|2014-04-08|Oracle International Corporation|Interactive product configurator with persistent component association|
US8335724B2|2008-08-13|2012-12-18|Branders.Com, Inc.|Customized virtual catalog|
US8520979B2|2008-08-19|2013-08-27|Digimarc Corporation|Methods and systems for content processing|
US20100048290A1|2008-08-19|2010-02-25|Sony Computer Entertainment Europe Ltd.|Image combining method, system and apparatus|
JP5703220B2|2008-08-22|2015-04-15|ザズル インコーポレイテッド|Product customization system and method|
US20100066750A1|2008-09-16|2010-03-18|Motorola, Inc.|Mobile virtual and augmented reality system|
US20100066731A1|2008-09-16|2010-03-18|James Calvin Vecore|Configurator Process and System|
US8873829B1|2008-09-26|2014-10-28|Amazon Technologies, Inc.|Method and system for capturing and utilizing item attributes|
US8422777B2|2008-10-14|2013-04-16|Joshua Victor Aller|Target and method of detecting, identifying, and determining 3-D pose of the target|
US20100114874A1|2008-10-20|2010-05-06|Google Inc.|Providing search results|
US9702071B2|2008-10-23|2017-07-11|Zazzle Inc.|Embroidery system and method|
JP2010117870A|2008-11-12|2010-05-27|B-Core Inc|Included type code symbol and its reading method, closed area inclusion type code symbol and its reading method, article added with included type code symbol, and article added with closed area inclusion type code symbol|
US8384947B2|2008-11-17|2013-02-26|Image Trends, Inc.|Handheld scanner and system comprising same|
US20100145492A1|2008-12-09|2010-06-10|The Boeing Company|Automated Custom Design Generation|
US8606657B2|2009-01-21|2013-12-10|Edgenet, Inc.|Augmented reality method and system for designing environments and buying/selling goods|
WO2010087886A1|2009-01-27|2010-08-05|Gannon Technologies Group Llc|Systems and methods for graph-based pattern recognition technology applied to the automated identification of fingerprints|
US8681145B2|2009-03-20|2014-03-25|Disney Enterprises, Inc.|Attribute transfer between computer models including identifying isomorphic regions in polygonal meshes|
JP5466418B2|2009-03-27|2014-04-09|株式会社成基総研|Determination device, determination method, and program|
JP2010287174A|2009-06-15|2010-12-24|Dainippon Printing Co Ltd|Furniture simulation method, device, program, recording medium|
US8214069B2|2009-10-23|2012-07-03|Certusoft, Inc.|Automated hierarchical configuration of custom products with complex geometries: method and apparatus|
US8497876B2|2009-11-02|2013-07-30|Pacific Data Images Llc|Infinite complexity deep-framebuffer rendering|
US8700492B1|2009-12-09|2014-04-15|Amazon Technologies, Inc.|Customized product display|
US20110225038A1|2010-03-15|2011-09-15|Yahoo! Inc.|System and Method for Efficiently Evaluating Complex Boolean Expressions|
US9171361B2|2010-04-23|2015-10-27|Flir Systems Ab|Infrared resolution and contrast enhancement with fusion|
US8429110B2|2010-06-10|2013-04-23|Microsoft Corporation|Pattern tree-based rule learning|
EP2395474A3|2010-06-11|2014-03-26|Nintendo Co., Ltd.|Storage medium having image recognition program stored therein, image recognition apparatus, image recognition system, and image recognition method|
US8190486B1|2010-07-15|2012-05-29|Myworld, Inc.|Techniques for product selection|
US8290822B2|2010-08-20|2012-10-16|Valuemomentum, Inc.|Product configuration server for efficiently displaying selectable attribute values for configurable products|
US8566714B1|2010-09-20|2013-10-22|Insignia Group, L.C.|Catalog accessories meta-configurator, and its use thereof|
JP5646263B2|2010-09-27|2014-12-24|任天堂株式会社|Image processing program, image processing apparatus, image processing system, and image processing method|
WO2012064893A2|2010-11-10|2012-05-18|Google Inc.|Automated product attribute selection|
US20120123674A1|2010-11-15|2012-05-17|Microsoft Corporation|Displaying product recommendations on a map|
JP5697487B2|2011-02-25|2015-04-08|任天堂株式会社|Image processing system, image processing method, image processing apparatus, and image processing program|
US8831279B2|2011-03-04|2014-09-09|Digimarc Corporation|Smartphone-based methods and systems|
CN103493463A|2011-04-25|2014-01-01|阿尔卡特朗讯|Privacy protection in recommendation services|
US8787707B1|2011-06-29|2014-07-22|Amazon Technologies, Inc.|Identification of product attributes|
US9002896B2|2011-08-23|2015-04-07|Xerox Corporation|Knowledge-assisted approach to dynamically create data sources for variable-data marketing campaigns|
US9348890B2|2011-08-30|2016-05-24|Open Text S.A.|System and method of search indexes using key-value attributes to searchable metadata|
WO2013067437A1|2011-11-02|2013-05-10|Hoffman Michael Theodor|Systems and methods for dynamic digital product synthesis, commerce, and distribution|
CN103890709B|2011-11-07|2016-08-17|英派尔科技开发有限公司|Key value database based on caching maps and replicates|
US8495072B1|2012-01-27|2013-07-23|International Business Machines Corporation|Attribute-based identification schemes for objects in internet of things|
CN103970604B|2013-01-31|2017-05-03|国际商业机器公司|Method and device for realizing image processing based on MapReduce framework|
US8712566B1|2013-03-14|2014-04-29|Zazzle Inc.|Segmentation of a product markup image based on color and color differences|
JP6281225B2|2013-09-30|2018-02-21|NEC Corporation|Information processing device|
US11157977B1|2007-10-26|2021-10-26|Zazzle Inc.|Sales system using apparel modeling system and method|
US9213920B2|2010-05-28|2015-12-15|Zazzle.Com, Inc.|Using infrared imaging to create digital images for use in product customization|
BR112014004988A2|2011-08-31|2017-05-30|Zazzle Com Inc|tile process for digital image recovery|
US9147213B2|2007-10-26|2015-09-29|Zazzle Inc.|Visualizing a custom product in situ|
US8174521B2|2007-10-26|2012-05-08|Zazzle.Com|Product modeling system and method|
US9495386B2|2008-03-05|2016-11-15|Ebay Inc.|Identification of items depicted in images|
US10719862B2|2008-07-29|2020-07-21|Zazzle Inc.|System and method for intake of manufacturing patterns and applying them to the automated production of interactive, customizable product|
US9164577B2|2009-12-22|2015-10-20|Ebay Inc.|Augmented reality system, method, and apparatus for displaying an item image in a contextual environment|
US10127606B2|2010-10-13|2018-11-13|Ebay Inc.|Augmented reality system and method for visualizing an item|
US9449342B2|2011-10-27|2016-09-20|Ebay Inc.|System and method for visualization of items in an environment using augmented reality|
US9240059B2|2011-12-29|2016-01-19|Ebay Inc.|Personal augmented reality|
US10969743B2|2011-12-29|2021-04-06|Zazzle Inc.|System and method for the efficient recording of large aperture wave fronts of visible and near visible light|
JP5912059B2|2012-04-06|2016-04-27|Sony Corporation|Information processing apparatus, information processing method, and information processing system|
US9336541B2|2012-09-21|2016-05-10|Paypal, Inc.|Augmented reality product instructions, tutorials and visualizations|
US8712566B1|2013-03-14|2014-04-29|Zazzle Inc.|Segmentation of a product markup image based on color and color differences|
US9330407B2|2013-03-15|2016-05-03|Zazzle Inc.|Specification and display of product customization options|
WO2014144936A1|2013-03-15|2014-09-18|Videri Inc.|Systems and methods for displaying, distributing, viewing and controlling digital art and imaging|
US8958663B1|2013-09-24|2015-02-17|Zazzle Inc.|Automated imaging of customizable products|
US9767430B2|2013-11-11|2017-09-19|International Business Machines Corporation|Contextual searching via a mobile computing device|
US9204018B2|2014-01-21|2015-12-01|Carbon Objects, Inc.|System and method of adjusting the color of image objects based on chained reference points, gradient characterization, and pre-stored indicators of environmental lighting conditions|
US9639957B2|2014-06-12|2017-05-02|A9.Com, Inc.|Recommendations utilizing visual image analysis|
US9956788B2|2015-11-24|2018-05-01|Stitch City Industries|Devices and methods for printing on boards|
US10404938B1|2015-12-22|2019-09-03|Steelcase Inc.|Virtual world method and system for affecting mind state|
US10156841B2|2015-12-31|2018-12-18|General Electric Company|Identity management and device enrollment in a cloud service|
US10181218B1|2016-02-17|2019-01-15|Steelcase Inc.|Virtual affordance sales tool|
US10182210B1|2016-12-15|2019-01-15|Steelcase Inc.|Systems and methods for implementing augmented reality and/or virtual reality|
US11158308B1|2019-11-27|2021-10-26|Amazon Technologies, Inc.|Configuring natural language system|
DE102020112700A1|2020-05-11|2021-11-11|painted life pieces GbR|Print processing apparatus and method|
Legal status:
2014-12-09| B03A| Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]|
2015-03-24| B25F| Entry of change of name and/or headquarter and transfer of application, patent and certif. of addition of invention: change of name on requirement|Owner name: ZAZZLE.COM, INC (US) Free format text: In order to comply with the change of name requested through petition No. 860140108430, of 07/02/2014, it is necessary to present a translation of the name-change document, and it must bear consular legalization. |
2015-10-27| B25D| Requested change of name of applicant approved|Owner name: ZAZZLE INC (US) |
2016-09-20| B08F| Application dismissed because of non-payment of annual fees [chapter 8.6 patent gazette]|Free format text: Referring to the 4th annuity. |
2016-11-16| B08G| Application fees: restoration [chapter 8.7 patent gazette]|
2018-12-11| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-10-15| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-06-23| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2020-10-06| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2020-10-27| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: Term of validity: 20 (twenty) years counted from 08/30/2012, subject to the legal conditions. |
Priority:
Application number | Filing date | Patent title
US201161529883P|true|2011-08-31|2011-08-31|
US61/529,883|2011-08-31|
US13/539,788|2012-07-02|
US13/539,788|US9147213B2|2007-10-26|2012-07-02|Visualizing a custom product in situ|